
Class-prior Estimation for Learning from Positive and Unlabeled Data



Abstract

We consider the problem of estimating the class prior in an unlabeled dataset. Under the assumption that an additional labeled dataset is available, the class prior can be estimated by fitting a mixture of class-wise data distributions to the unlabeled data distribution. However, in practice, such an additional labeled dataset is often not available. In this paper, we show that, with additional samples coming only from the positive class, the class prior of the unlabeled dataset can be estimated correctly. Our key idea is to use properly penalized divergences for model fitting to cancel the error caused by the absence of negative samples. We further show that the use of the penalized $L_1$-distance gives a computationally efficient algorithm with an analytic solution. The consistency, stability, and estimation error are theoretically analyzed. Finally, we experimentally demonstrate the usefulness of the proposed method.
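To make the setting concrete, the following is a minimal LaTeX sketch of the mixture model and the penalized-divergence estimator the abstract describes. The symbols $p(x)$, $p(x \mid y=+1)$, $\pi$, $\theta$, and the generic penalized divergence $\mathrm{Div}_{\mathrm{pen}}$ are notation introduced here for illustration; the exact form of the paper's penalized $L_1$-distance is not reproduced.

```latex
% Illustrative formalization only: notation introduced for this sketch,
% and the penalized divergence Div_pen is left abstract (the paper's
% penalized L1-distance is one concrete choice with an analytic solution).
\documentclass{article}
\usepackage{amsmath}
\begin{document}

Let $p(x)$ denote the density of the unlabeled data,
$p(x \mid y=+1)$ the positive-class density (accessible through the
additional positive samples), and $\pi = p(y=+1)$ the unknown class
prior. The unlabeled data follow the two-component mixture
\begin{equation}
  p(x) = \pi \, p(x \mid y=+1) + (1 - \pi) \, p(x \mid y=-1).
\end{equation}
Because no negative samples are available, only the partial model
$q_\theta(x) = \theta \, p(x \mid y=+1)$ can be fitted to $p(x)$, and a
naive divergence fit is distorted by the missing negative component.
With a properly penalized divergence $\mathrm{Div}_{\mathrm{pen}}$ that
cancels this error, the class prior is recovered as
\begin{equation}
  \hat{\pi} = \operatorname*{arg\,min}_{\theta \in [0,1]}
  \mathrm{Div}_{\mathrm{pen}}\bigl(\theta \, p(\cdot \mid y=+1) \,\|\, p\bigr).
\end{equation}

\end{document}
```

Per the abstract, choosing the penalized $L_1$-distance as the divergence in this minimization yields an analytic solution, which is what makes the resulting algorithm computationally efficient.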


